
Keyword Search Result

[Keyword] visual tracking (29 hits)

Results 21-29 of 29

  • The State-of-the-Art in Handling Occlusions for Visual Object Tracking Open Access

    Kourosh MESHGI  Shin ISHII  

     
    SURVEY PAPER-Image Recognition, Computer Vision

  Publicized:
    2015/03/27
      Vol:
    E98-D No:7
      Page(s):
    1260-1274

    This paper surveys the recent literature on occlusion handling in online visual tracking. The discussion first explores the visual tracking realm and pinpoints why the occlusion problem demands dedicated attention. The findings suggest that although occlusion detection facilitates tracking impressively, it has been largely ignored. The literature further shows that research is concentrated on human tracking and crowd analysis. This is followed by a novel taxonomy of occlusion types and the challenges they raise during and after an occlusion emerges. The discussion then investigates approaches that handle occlusion on a frame-by-frame basis. The analysis reveals that researchers have examined every aspect of tracker design hypothesized to benefit robust tracking under occlusion. State-of-the-art solutions identified in the literature involve various camera settings, simplifying assumptions, appearance and motion models, target state representations, and observation models. The identified clusters are then analyzed and discussed, and their merits and demerits are explained. Finally, promising areas for future research are presented.

  • Multi-Task Object Tracking with Feature Selection

    Xu CHENG  Nijun LI  Tongchi ZHOU  Zhenyang WU  Lin ZHOU  

     
    LETTER-Image

      Vol:
    E98-A No:6
      Page(s):
    1351-1354

    In this paper, we propose an efficient tracking method formulated as a multi-task reverse sparse representation problem. The proposed method learns the representations of all tasks jointly using a customized accelerated proximal gradient (APG) method within a few iterations. To reduce the computational complexity, the tracking algorithm starts from a feature selection scheme that chooses a suitable number of features from the object and background in the dynamic environment. Based on the selected features, multiple templates are constructed together with a few candidates. The candidate with the highest similarity to the object templates is taken as the final tracking result. In addition, we present a template update scheme to capture appearance changes of the object, while keeping several earlier templates in the positive template set unchanged to alleviate the drifting problem. Both qualitative and quantitative evaluations demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods.
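The selection step described above can be illustrated with a simplified stand-in: where the paper formulates candidate-template matching as a multi-task reverse sparse representation solved by APG, the sketch below uses plain cosine similarity and picks the candidate that best matches any object template. The function names and data layout are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_candidate(candidates, templates):
    """Return the index of the candidate whose best template similarity is highest."""
    scores = [max(cosine(c, t) for t in templates) for c in candidates]
    return max(range(len(candidates)), key=lambda i: scores[i])
```

For example, with templates `[[1.0, 0.0]]`, the candidate `[1.0, 0.1]` scores far higher than `[0.0, 1.0]` and would be selected as the tracking result.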

  • Robust Superpixel Tracking with Weighted Multiple-Instance Learning

    Xu CHENG  Nijun LI  Tongchi ZHOU  Lin ZHOU  Zhenyang WU  

     
    LETTER-Image Recognition, Computer Vision

  Publicized:
    2015/01/15
      Vol:
    E98-D No:4
      Page(s):
    980-984

    This paper proposes a robust superpixel-based tracker via multiple-instance learning (MIL), which exploits the importance of instances and the mid-level features captured by superpixels for object tracking. We first present a superpixel-based appearance model that can compute the confidences of the object and background. Most importantly, we introduce sample importance into the MIL procedure to improve tracking performance: the importance of each instance in the positive bag is defined by accumulating the confidences of all the pixels within that instance. Furthermore, when drift occurs, the tracker can recover the object using the superpixel-based appearance model. We keep the first (k-1) frames' information unchanged during the update process to alleviate drift to some extent. To evaluate the effectiveness of the proposed tracker, six video sequences covering different challenging situations are tested. The comparison results demonstrate that the proposed tracker is more robust and accurate than six state-of-the-art trackers.
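As a rough illustration of the importance weighting described above (not the paper's exact formulation), the per-instance importance can be accumulated from the pixel confidences within each superpixel and normalized across the bag:

```python
def instance_importance(confidence_map, label_map, n_instances):
    """Sum per-pixel confidences within each superpixel (instance),
    then normalize so the importances across the bag sum to 1."""
    totals = [0.0] * n_instances
    for conf_row, label_row in zip(confidence_map, label_map):
        for conf, label in zip(conf_row, label_row):
            totals[label] += conf
    s = sum(totals)
    return [t / s for t in totals] if s else totals
```

Here `confidence_map` plays the role of the appearance model's object/background confidences, and `label_map` assigns each pixel to a superpixel; both names are assumptions for the sketch.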

  • Robust Visual Tracking Using Sparse Discriminative Graph Embedding

    Jidong ZHAO  Jingjing LI  Ke LU  

     
    PAPER-Image Recognition, Computer Vision

  Publicized:
    2015/01/19
      Vol:
    E98-D No:4
      Page(s):
    938-947

    For robust visual tracking, the main challenge of a subspace representation model is handling the various appearances of the target object. Traditional subspace learning tracking algorithms neglect the discriminative correlation between different multi-view target samples and the effectiveness of sparse subspace learning. To learn a better subspace representation model, we design a discriminative graph that models both the labeled target samples with various appearances and the updated foreground and background samples, which are selected using an incremental updating scheme. The proposed discriminative graph structure not only explicitly captures multi-modal intraclass correlations within the labeled samples but also balances the within-class local manifold against the global discriminative information from foreground and background samples. Based on the discriminative graph, we achieve a sparse embedding using the L2,1-norm, which is incorporated to select relevant features and learn the transformation in a unified framework. In the tracking procedure, the subspace learning is embedded into a Bayesian inference framework with compound motion estimation and a discriminative observation model, which makes localization significantly more effective and accurate. Experiments on several videos demonstrate that the proposed algorithm is robust to various appearances, especially in dynamically changing and cluttered situations, and performs better than alternatives reported in the recent literature.
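For reference, the L2,1-norm used above is the sum of the L2 norms of a matrix's rows; minimizing it drives entire rows toward zero, which is what makes it suitable for selecting relevant features (rows) in the embedding. A minimal sketch:

```python
import math

def l21_norm(W):
    """L2,1-norm of a matrix W: sum over rows of each row's Euclidean norm."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)
```

For example, `l21_norm([[3.0, 4.0], [0.0, 0.0]])` is 5.0: the all-zero second row contributes nothing, so a regularizer built on this norm favors solutions that zero out whole feature rows.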

  • A Robust Visual Tracker with a Coupled-Classifier Based on Multiple Representative Appearance Models

    Deqian FU  Seong Tae JHANG  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E96-D No:8
      Page(s):
    1826-1835

    Aiming to alleviate drift, which degrades the performance of almost all online visual trackers, a robust visual tracker (called the CCMM tracker) is proposed with a coupled classifier based on multiple representative appearance models. The coupled classifier consists of root and head classifiers based on local sparse representation. The two classifiers collaborate to perform tracking within a Bayesian framework and to update their templates with a novel mechanism that tries to guarantee updates proceed in the right direction; consequently, the tracker is more robust to interference. Meanwhile, the multiple representative appearance models maintain features of the different submanifolds of the target appearance that the target has exhibited previously. These models cooperatively support the coupled classifier in recognizing the target in challenging cases (such as persistent disturbance, large appearance changes, and recovery from occlusion) with an effective strategy. Extensive experiments demonstrate that the proposed tracker, by explicit inference, can reduce drift and handle frequent and drastic appearance variation of the target against cluttered backgrounds.

  • A Novel View of Color-Based Visual Tracker Using Principal Component Analysis

    Kiyoshi NISHIYAMA  Xin LU  

     
    LETTER-Vision

      Vol:
    E91-A No:12
      Page(s):
    3843-3848

    An extension of the traditional color-based visual tracker, i.e., the continuously adaptive mean shift (CAMShift) tracker, is given to improve the convenience and generality of color-based tracking. This is achieved by introducing a probability density function for pixels based on the hue histogram of the object. As its merits, the direction and size of the tracked object are easily derived by principal component analysis (PCA), and the extension to the three-dimensional case becomes straightforward.
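The PCA step mentioned above can be sketched as follows: treating each pixel's hue-based probability as a weight, the orientation and axis lengths of the tracked region follow from the closed-form eigen-decomposition of the weighted 2x2 covariance of the pixel coordinates. This is an illustrative reconstruction, not the authors' code.

```python
import math

def pca_orientation(points, weights):
    """points: list of (x, y) pixel coordinates; weights: per-pixel probabilities.
    Returns (major-axis angle in radians, major variance, minor variance)."""
    w = sum(weights)
    mx = sum(wi * x for (x, _), wi in zip(points, weights)) / w
    my = sum(wi * y for (_, y), wi in zip(points, weights)) / w
    # Weighted 2x2 covariance [[a, b], [b, c]] about the weighted centroid.
    a = sum(wi * (x - mx) ** 2 for (x, _), wi in zip(points, weights)) / w
    c = sum(wi * (y - my) ** 2 for (_, y), wi in zip(points, weights)) / w
    b = sum(wi * (x - mx) * (y - my) for (x, y), wi in zip(points, weights)) / w
    # Closed-form eigenvalues and major-axis angle of a symmetric 2x2 matrix.
    t, d = (a + c) / 2.0, a * c - b * b
    root = math.sqrt(max(t * t - d, 0.0))
    l1, l2 = t + root, t - root
    theta = 0.5 * math.atan2(2.0 * b, a - c)
    return theta, l1, l2
```

For pixels lying along a 45-degree line with uniform weights, the returned angle is pi/4 and the minor-axis variance is zero, matching the intuition that PCA recovers the object's elongation direction and extent.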

  • Visual Tracking in Occlusion Environments by Autonomous Switching of Targets

    Jun-ichi IMAI  Masahide KANEKO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E91-D No:1
      Page(s):
    86-95

    Visual tracking is required by many vision applications such as human-computer interfaces and human-robot interaction. However, in the daily living spaces where such applications are assumed to be used, stable tracking is often difficult because many objects can cause visual occlusion. While conventional tracking techniques can handle partial and short-term occlusion to some extent, they fail when presented with complete occlusion over long periods. They also cannot handle the case where an occluder such as a box or a bag contains and carries the tracking target, that is, where the target moves invisibly while contained by the occluder. In this paper, to handle this occlusion problem, we propose a method for visual tracking by a particle filter that switches tracking targets autonomously. In our method, if occlusion occurs during tracking, a model of the occluder is dynamically created and the tracking target is switched to this model. Our method thus enables the tracker to follow the "invisible target" indirectly by switching its target to the occluder. Experimental results show the effectiveness of our method.
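For context, the particle filter machinery that the proposed method builds on can be sketched minimally for a 1D position. The occluder modeling and autonomous target switching are the paper's contribution and are not reproduced here; all parameter values below are illustrative.

```python
import math
import random

def particle_filter(observations, n=300, motion_std=0.3, obs_std=0.5, seed=0):
    """Bootstrap particle filter tracking a 1D position from noisy observations."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n)]
    estimate = 0.0
    for z in observations:
        # Predict: diffuse each particle with the motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Update: weight particles by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimate = sum(w * p for w, p in zip(weights, particles))
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=weights, k=n)
    return estimate
```

Feeding it repeated observations of a target near position 5.0 concentrates the particle cloud around 5.0 within a few frames; the paper's extension, roughly, is to re-target this machinery onto a dynamically created occluder model when the true target disappears.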

  • A Motion/Shape Estimation of Multiple Objects Using an Advanced Contour Matching Technique

    Junghyun HWANG  Yoshiteru OOI  Shinji OZAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:6
      Page(s):
    676-685

    An approach to estimating information about moving objects is described in terms of their kinetic and static properties, such as the 2D velocity, acceleration, position, and size of each object, covering the features of motion and shape. To obtain motion/shape information for multiple objects, an advanced contour matching scheme is developed, which includes the synthesis of edge images and the analysis of object shape with high matching confidence as well as low computation cost. The scheme is composed of three algorithms: motion estimation by an iterative triple cross-correlation, image synthesis by shifting and masking the object, and shape analysis for determining the object size. By applying fuzzy membership functions to the object shape, the scheme improves the accuracy of capturing the motion and shape of multiple moving objects. Experimental results show that the proposed method is valid for several walking people in a real scene.

  • An Adaptive Sensing System with Tracking and Zooming a Moving Object

    Junghyun HWANG  Yoshiteru OOI  Shinji OZAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:8
      Page(s):
    926-934

    This paper describes an adaptive sensing system that tracks and zooms on a moving object in a stable environment. A close contour matching technique and the effective determination of the zoom ratio by fuzzy control are proposed to realize the sensing system. First, the estimation of the object feature parameters, 2-dimensional velocity and size, is based on close contour matching. The correspondence problem is solved by cross-correlation of projections extracted from object contours in specialized difference images. In a stable environment, this contour matching, which can eliminate occluded contours, random noise, and background, works well without costly optical flow calculation. Next, to zoom on the tracked object in accordance with the state of its shape and movement, fuzzy control is introduced. Three sets of input membership functions (the confidence of the object shape, the variance of the object velocity, and the object size) are evaluated with a simplified implementation. The optimal focal length achieves not only the desired size but also safe tracking, in combination with a fuzzy rule matrix built from the membership functions. Experimental results show that the proposed system is robust and valid for many kinds of moving objects in real scenes, with a system period of 1.85 s.
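The projection-based correspondence idea can be sketched as follows. This is a hedged reconstruction: the actual system uses an iterative triple cross-correlation, while here a single 1D cross-correlation over horizontal projections illustrates how an inter-frame shift could be found.

```python
def projection(mask):
    """Column-wise sum of a binary image: its horizontal projection."""
    return [sum(col) for col in zip(*mask)]

def best_shift(p_prev, p_curr, max_shift):
    """Shift s maximizing the cross-correlation of p_prev[i] with p_curr[i+s]."""
    n = len(p_prev)
    def corr(s):
        return sum(p_prev[i] * p_curr[i + s]
                   for i in range(n) if 0 <= i + s < n)
    return max(range(-max_shift, max_shift + 1), key=corr)
```

For instance, a projection profile translated two columns to the right yields a best shift of 2, which corresponds to the object's horizontal displacement between frames.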
